Learning Multivariate Log-concave Distributions

Authors

  • Ilias Diakonikolas
  • Daniel M. Kane
  • Alistair Stewart
Abstract

We study the problem of estimating multivariate log-concave probability density functions. We prove the first sample complexity upper bound for learning log-concave densities on ℝ^d, for all d ≥ 1. Prior to our work, no upper bound on the sample complexity of this learning problem was known for the case d > 3. In more detail, we give an estimator that, for any d ≥ 1 and ε > 0, draws Õ_d((1/ε)^((d+5)/2)) samples from an unknown target log-concave density on ℝ^d, and outputs a hypothesis that (with high probability) is ε-close to the target in total variation distance. Our upper bound on the sample complexity comes close to the known lower bound of Ω_d((1/ε)^((d+1)/2)) for this problem.
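To make the stated bounds concrete, here is a minimal Python sketch (an illustration, not part of the paper) that evaluates the polynomial parts of the upper bound Õ_d((1/ε)^((d+5)/2)) and the lower bound Ω_d((1/ε)^((d+1)/2)), ignoring the dimension-dependent constants and logarithmic factors hidden by the Õ_d and Ω_d notation; the choice of ε is arbitrary:

```python
# Illustrative only: polynomial parts of the sample complexity bounds,
# with the constants and log factors hidden by O~_d / Omega_d dropped.

def upper_bound_samples(d: int, eps: float) -> float:
    """Upper bound from this paper, up to hidden factors: (1/eps)^((d+5)/2)."""
    return (1.0 / eps) ** ((d + 5) / 2)

def lower_bound_samples(d: int, eps: float) -> float:
    """Known lower bound, up to hidden factors: (1/eps)^((d+1)/2)."""
    return (1.0 / eps) ** ((d + 1) / 2)

eps = 0.1  # arbitrary accuracy parameter for the illustration
for d in (1, 2, 3, 5):
    up = upper_bound_samples(d, eps)
    low = lower_bound_samples(d, eps)
    # The two exponents differ by 2, so the gap is (1/eps)^2 in every dimension.
    print(f"d={d}: upper ~ {up:.0f}, lower ~ {low:.0f}, gap factor = {up/low:.0f}")
```

For ε = 0.1 the printed gap factor is 100 in every dimension, reflecting that the exponents of the two bounds differ by exactly 2.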


Related articles

Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness

Polynomial approximations to Boolean functions have led to many positive results in computer science. In particular, polynomial approximations to the sign function underlie algorithms for agnostically learning halfspaces, as well as pseudorandom generators for halfspaces. In this work, we investigate the limits of these techniques by proving inapproximability results for the sign function. First...


Learning mixtures of structured distributions over discrete domains

Let C be a class of probability distributions over the discrete domain [n] = {1, ..., n}. We show that if C satisfies a rather general condition – essentially, that each distribution in C can be well-approximated by a variable-width histogram with few bins – then there is a highly efficient (both in terms of running time and sample complexity) algorithm that can learn any mixture of k unknow...


Sample and Computationally Efficient Learning Algorithms under S-Concave Distributions

We provide new results for noise-tolerant and sample-efficient learning algorithms under s-concave distributions. The new class of s-concave distributions is a broad and natural generalization of log-concavity, and includes many important additional distributions, e.g., the Pareto distribution and t-distribution. This class has been studied in the context of efficient sampling, i...


Active and passive learning of linear separators under log-concave distributions

We provide new results concerning label efficient, polynomial time, passive and active learning of linear separators. We prove that active learning provides an exponential improvement over PAC (passive) learning of homogeneous linear separators under nearly log-concave distributions. Building on this, we provide a computationally efficient PAC algorithm with optimal (up to a constant factor) sa...


Learning Halfspaces with Malicious Noise

We give new algorithms for learning halfspaces in the challenging malicious noise model, where an adversary may corrupt both the labels and the underlying distribution of examples. Our algorithms can tolerate malicious noise rates exponentially larger than previous work in terms of the dependence on the dimension n, and succeed for the fairly broad class of all isotropic log-concave distributio...




Publication date: 2017